Installation Guide
Supported Platforms
The NVIDIA Container Toolkit is available on a variety of Linux distributions and supports different container engines.

Note: As of NVIDIA Container Toolkit 1.7.0, support for Jetson platforms is included for the Ubuntu 18.04, Ubuntu 20.04, and Ubuntu 22.04 distributions. This means that the installation instructions provided for these distributions are expected to work on Jetson devices.

Linux Distributions

Supported Linux distributions are listed below:

| OS Name / Version       | Identifier  | amd64 / x86_64 | ppc64le | arm64 / aarch64 |
|-------------------------|-------------|----------------|---------|-----------------|
| Amazon Linux 2          | amzn2       | X              |         | X               |
| Amazon Linux 2017.09    | amzn2017.09 | X              |         |                 |
| Amazon Linux 2018.03    | amzn2018.03 | X              |         |                 |
| Open Suse/SLES 15.0     | sles15.0    | X              |         |                 |
| Open Suse/SLES 15.x (*) | sles15.x    | X              |         |                 |
| Debian Linux 9          | debian9     | X              |         |                 |
| Debian Linux 10         | debian10    | X              |         |                 |
| Debian Linux 11 (#)     | debian11    | X              |         |                 |
| Centos 7                | centos7     | X              | X       |                 |
| Centos 8                | centos8     | X              | X       | X               |
| RHEL 7.x (&)            | rhel7.x     | X              | X       |                 |
| RHEL 8.x (@)            | rhel8.x     | X              | X       | X               |
| RHEL 9.x (@)            | rhel9.x     | X              | X       | X               |
| Ubuntu 16.04            | ubuntu16.04 | X              | X       |                 |
| Ubuntu 18.04            | ubuntu18.04 | X              | X       | X               |
| Ubuntu 20.04 (%)        | ubuntu20.04 | X              | X       | X               |
| Ubuntu 22.04 (%)        | ubuntu22.04 | X              | X       | X               |

(*) Minor releases of Open Suse/SLES 15.x are symlinked (redirected) to sles15.1.
(#) Debian 11 packages are symlinked (redirected) to debian10.
(&) RHEL 7 packages are symlinked (redirected) to centos7.
(@) RHEL 8 and RHEL 9 packages are symlinked (redirected) to centos8.
(%) Ubuntu 20.04 and Ubuntu 22.04 packages are symlinked (redirected) to ubuntu18.04.

Container Runtimes

Supported container runtimes are listed below:

| OS Name / Version    | amd64 / x86_64 | ppc64le | arm64 / aarch64 |
|----------------------|----------------|---------|-----------------|
| Docker 18.09         | X              | X       | X               |
| Docker 19.03         | X              | X       | X               |
| Docker 20.10         | X              | X       | X               |
| RHEL/CentOS 8 podman | X              |         |                 |
| CentOS 8 Docker      | X              |         |                 |
| RHEL/CentOS 7 Docker | X              |         |                 |

Note: On Red Hat Enterprise Linux (RHEL) 8, Docker is no longer a supported container runtime. See Building, Running and Managing Containers for more information on the container tools available on the distribution.

Prerequisites

NVIDIA Drivers

Before you get started, make sure you have installed the NVIDIA driver for your Linux distribution. The recommended way to install drivers is to use the package manager for your distribution, but other installer mechanisms are also available (e.g. downloading .run installers from NVIDIA Driver Downloads). For instructions on using your package manager to install drivers from the official CUDA network repository, follow the steps in this guide.

Platform Requirements

The prerequisites for running the NVIDIA Container Toolkit are listed below:

- GNU/Linux x86_64 with kernel version > 3.10
- Docker >= 19.03 (recommended; some distributions may include older versions of Docker, and the minimum supported version is 1.12)
- NVIDIA GPU with architecture >= Kepler (or compute capability 3.0)
- NVIDIA Linux drivers >= 418.81.07 (note that older driver releases or branches are unsupported)

Note: Your driver version might limit your CUDA capabilities. Newer NVIDIA drivers are backwards-compatible with CUDA Toolkit versions, but each new version of CUDA requires a minimum driver version. Running a CUDA container requires a machine with at least one CUDA-capable GPU and a driver compatible with the CUDA Toolkit version you are using. The machine running the CUDA container only requires the NVIDIA driver; the CUDA Toolkit does not have to be installed. The CUDA release notes include a table of the minimum driver and CUDA Toolkit versions.
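The commands below are an unofficial, minimal sketch for verifying these prerequisites on a host before proceeding. They assume the NVIDIA driver is already installed (so that nvidia-smi is available) and that Docker, if present, is on the PATH:

$ uname -r                     # kernel version; should be > 3.10
$ nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
                               # GPU model and driver; driver should be >= 418.81.07
$ docker --version             # Docker version; >= 19.03 is recommended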
Container Device Interface (CDI) Support

As of the v1.12.0 release, the NVIDIA Container Toolkit includes support for generating Container Device Interface (CDI) specifications for use with CDI-enabled container engines and CLIs. These include:

- cri-o
- containerd
- podman

The use of CDI greatly improves the compatibility of the NVIDIA container stack with certain features such as rootless containers.

Step 1: Install NVIDIA Container Toolkit

In order to generate CDI specifications for the NVIDIA devices available on a system, only the base components of the NVIDIA Container Toolkit are required. This means that the instructions for configuring the NVIDIA Container Toolkit repositories should be followed as normal, but instead of the nvidia-container-toolkit package, the nvidia-container-toolkit-base package should be installed:

$ sudo apt-get update \
    && sudo apt-get install -y nvidia-container-toolkit-base

$ sudo dnf clean expire-cache \
    && sudo dnf install -y nvidia-container-toolkit-base

This includes the NVIDIA Container Toolkit CLI (nvidia-ctk), and the installed version can be confirmed by running:

$ nvidia-ctk --version

Step 2: Generate a CDI specification

To generate a CDI specification that refers to all devices, use the following command:

$ sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

To check the names of the generated devices, run:

$ grep " name:" /etc/cdi/nvidia.yaml

Note: This is run as sudo to ensure that the file at /etc/cdi/nvidia.yaml can be created. If --output is not specified, the generated specification is printed to STDOUT.

Note: Two typical locations for CDI specifications are /etc/cdi/ and /var/run/cdi, but the exact paths may depend on the CDI consumers (e.g. container engines) being used.

Note: If the device or CUDA driver configuration changes, a new CDI specification must be generated. A configuration change could occur when MIG devices are created or removed, or when the driver is upgraded.

Step 3: Using the CDI specification

Note: The use of CDI to inject NVIDIA devices may conflict with the use of the NVIDIA Container Runtime hook. This means that if a /usr/share/containers/oci/hooks.d/oci-nvidia-hook.json file exists, it should be deleted, or care should be taken not to run containers with the NVIDIA_VISIBLE_DEVICES environment variable set.

The use of the CDI specification depends on the CDI-enabled container engine or CLI being used. In the case of podman, for example, releases as of v4.1.0 include support for specifying CDI devices in the --device flag. Assuming that the specification has been generated as above, running a container with access to all NVIDIA GPUs requires the following command:

$ podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi -L

which should show the same output as nvidia-smi -L run on the host. The CDI specification also contains references to individual GPUs or MIG devices, and these can be requested by specifying their names when launching a container:

$ podman run --rm --device nvidia.com/gpu=gpu0 --device nvidia.com/gpu=mig1:0 ubuntu nvidia-smi -L

Here the full GPU with index 0 and the first MIG device on GPU 1 are requested. The output should show only the UUIDs of the requested devices.
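Because the CDI specification must be regenerated whenever the device or driver configuration changes (see the note above), it can be convenient to wrap Steps 2 and 3 in a small helper. The script below is a sketch, not part of the official tooling; the path /usr/local/bin/nvidia-cdi-refresh.sh is a hypothetical choice, and the commands simply mirror those shown above:

$ cat <<'EOF' | sudo tee /usr/local/bin/nvidia-cdi-refresh.sh
#!/bin/sh
# Regenerate the CDI specification and print the generated device names.
set -e
nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
grep " name:" /etc/cdi/nvidia.yaml
EOF
$ sudo chmod +x /usr/local/bin/nvidia-cdi-refresh.sh

Such a helper could be re-run after a driver upgrade or after MIG devices are created or removed.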
Docker

Getting Started

For installing Docker CE, follow the official instructions for your supported Linux distribution. For convenience, the documentation below includes instructions on installing Docker for various Linux distributions.

Installing on Ubuntu and Debian

The following steps can be used to set up the NVIDIA Container Toolkit on Ubuntu LTS (18.04, 20.04, and 22.04) and Debian (Stretch, Buster) distributions.

Setting up Docker

Docker-CE on Ubuntu can be set up using Docker's official convenience script:

$ curl https://get.docker.com | sh \
    && sudo systemctl --now enable docker

See also: Follow the official Docker installation documentation for more details and the post-install documentation.

Setting up NVIDIA Container Toolkit

Set up the package repository and the GPG key:

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
      && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
      && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
            sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
            sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

Note: To get access to experimental features and release candidates, you may want to add the experimental branch to the repository listing:

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
      && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
      && curl -s -L https://nvidia.github.io/libnvidia-container/experimental/$distribution/libnvidia-container.list | \
            sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
            sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

Note: In some cases, the downloaded list file may contain URLs that do not seem to match the expected value of distribution. This is expected, as packages may be used for all compatible distributions. For example:

- For distribution values of ubuntu20.04 or ubuntu22.04, the file will contain ubuntu18.04 URLs.
- For a distribution value of debian11, the file will contain debian10 URLs.

Note: If running apt update after configuring repositories raises an error regarding a conflict in the Signed-By option, see the troubleshooting section.

Install the nvidia-container-toolkit package (and dependencies) after updating the package listing:

$ sudo apt-get update
$ sudo apt-get install -y nvidia-container-toolkit

Configure the Docker daemon to recognize the NVIDIA Container Runtime:

$ sudo nvidia-ctk runtime configure --runtime=docker

Restart the Docker daemon to complete the installation after setting the default runtime:

$ sudo systemctl restart docker

At this point, a working setup can be tested by running a base CUDA container:

$ sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

This should result in console output like the following:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
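To double-check that the nvidia runtime was registered with Docker, the daemon configuration and runtime list can be inspected. These checks are a sketch and assume the default daemon.json location; the Go template key used with docker info is believed to be supported but should be verified against your Docker version:

$ grep -i nvidia /etc/docker/daemon.json
$ sudo docker info --format '{{json .Runtimes}}'

Both should mention the nvidia runtime after the configure and restart steps above.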
Installing on CentOS 7/8

The following steps can be used to set up the NVIDIA Container Toolkit on CentOS 7/8.

Setting up Docker on CentOS 7/8

Note: If you're on a cloud instance such as EC2, the official CentOS images may not include tools such as iptables, which are required for a successful Docker installation. Try this command to get a more functional VM before proceeding with the remaining steps outlined in this document:

$ sudo dnf install -y tar bzip2 make automake gcc gcc-c++ vim pciutils elfutils-libelf-devel libglvnd-devel iptables

Setup the official Docker CE repository:

$ sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo    # CentOS 8
$ sudo yum-config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo    # CentOS 7

Now you can observe the packages available from the docker-ce repo:

$ sudo dnf repolist -v    # CentOS 8
$ sudo yum repolist -v    # CentOS 7

Since CentOS does not support the specific versions of the containerd.io packages that are required for newer versions of Docker-CE, one option is to manually install the containerd.io package and then proceed to install the docker-ce packages. Install the containerd.io package:

$ sudo dnf install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.4.3-3.1.el7.x86_64.rpm    # CentOS 8
$ sudo yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.4.3-3.1.el7.x86_64.rpm    # CentOS 7

And now install the latest docker-ce package:

$ sudo dnf install docker-ce -y    # CentOS 8
$ sudo yum install docker-ce -y    # CentOS 7

Ensure the Docker service is running with the following command:

$ sudo systemctl --now enable docker

And finally, test your Docker installation by running the hello-world container:

$ sudo docker run --rm hello-world

This should result in console output like the following:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
Digest: sha256:7f0a9f93b4aa3022c3a4c147a449bf11e0941a1fd0bf4a8e6c9408b2600777c5
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
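Rather than hard-coding the containerd.io RPM URL as above, the available versions can also be listed from the configured repository and a specific one selected. This is a sketch using standard dnf/yum options; the version shown is illustrative only:

$ sudo dnf list --showduplicates containerd.io     # CentOS 8
$ sudo yum list --showduplicates containerd.io     # CentOS 7
$ sudo dnf install -y containerd.io-1.4.3-3.1.el7  # example: pin a specific version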
Setting up NVIDIA Container Toolkit

Setup the repository and the GPG key:

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
      && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo

Note: To get access to experimental features and release candidates, you may want to add the experimental branch to the repository listing:

$ yum-config-manager --enable libnvidia-container-experimental

Install the nvidia-container-toolkit package (and dependencies) after updating the package listing:

$ sudo dnf clean expire-cache --refresh    # CentOS 8
$ sudo dnf install -y nvidia-container-toolkit

$ sudo yum clean expire-cache               # CentOS 7
$ sudo yum install -y nvidia-container-toolkit

Configure the Docker daemon to recognize the NVIDIA Container Runtime:

$ sudo nvidia-ctk runtime configure --runtime=docker

Restart the Docker daemon to complete the installation after setting the default runtime:

$ sudo systemctl restart docker

At this point, a working setup can be tested by running a base CUDA container:

$ sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

This should result in console output like the following:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
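For reference, the nvidia-ctk runtime configure step above edits /etc/docker/daemon.json. On a host with no prior daemon configuration, the resulting file typically looks similar to the following; the exact contents depend on your existing configuration, so treat this as an illustration rather than a reference:

$ cat /etc/docker/daemon.json
{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}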
Installing on RHEL 7

The following steps can be used to set up the NVIDIA Container Toolkit on RHEL 7.

Setting up Docker on RHEL 7

RHEL includes Docker in the Extras repository. To install Docker on RHEL 7, first enable this repository:

$ sudo subscription-manager repos --enable rhel-7-server-extras-rpms

Docker can then be installed using yum:

$ sudo yum install docker -y

See also: More information is available in the KB article 3727511.

Ensure the Docker service is running with the following command:

$ sudo systemctl --now enable docker

And finally, test your Docker installation. First, query the version info:

$ sudo docker -v

You should see output like the following:

Docker version 1.13.1, build 64e9980/1.13.1

Then run the hello-world container:

$ sudo docker run --rm hello-world

giving you the following result:

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Setting up NVIDIA Container Toolkit

Setup the repository and the GPG key:

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
      && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo

Note: To get access to experimental features and release candidates, you may want to add the experimental branch to the repository listing:

$ yum-config-manager --enable libnvidia-container-experimental

On RHEL 7, install the nvidia-container-toolkit package (and dependencies) after updating the package listing:

$ sudo yum clean expire-cache
$ sudo yum install nvidia-container-toolkit -y

Note: On POWER (ppc64le) platforms, the nvidia-container-hook package should be used instead of nvidia-container-toolkit.

Restart the Docker daemon to complete the installation after setting the default runtime:

$ sudo systemctl restart docker

At this point, a working setup can be tested by running a base CUDA container:

$ sudo docker run --rm -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

This should result in console output like the following:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   43C    P0    20W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Note: Depending on how your RHEL 7 system is configured with SELinux, you may have to use --security-opt=label=disable on the Docker command line to share parts of the host OS that cannot be relabeled. Without this option, you may observe the following error when running GPU containers: Failed to initialize NVML: Insufficient Permissions. However, using this option disables SELinux separation in the container, and the container is executed in an unconfined type. Review the SELinux policies on your system.
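For example, on an SELinux-enforcing RHEL 7 host, the test container above could be run with SELinux label separation disabled as described in the note (assuming the same CUDA image):

$ sudo docker run --rm --security-opt=label=disable \
      -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi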
Installing on SUSE 15

The following steps can be used to set up the NVIDIA Container Toolkit on SUSE SLES 15 and OpenSUSE Leap 15.

Setting up Docker on SUSE 15

To install the latest Docker 19.03 CE release on SUSE 15 (OpenSUSE Leap or SLES), you can use the Virtualization:containers project. First, set up the repository:

$ sudo zypper addrepo https://download.opensuse.org/repositories/Virtualization:containers/openSUSE_Leap_15.2/Virtualization:containers.repo \
    && sudo zypper refresh

Install the docker package:

$ sudo zypper install docker

Ensure the Docker service is running with the following command:

$ sudo systemctl --now enable docker

And finally, test your Docker installation by running the hello-world container:

$ sudo docker run --rm hello-world

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
Digest: sha256:7f0a9f93b4aa3022c3a4c147a449bf11e0941a1fd0bf4a8e6c9408b2600777c5
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Setting up NVIDIA Container Toolkit

Note: You may have to set the $distribution variable to opensuse-leap15.1 explicitly when adding the repositories.

Setup the repository and refresh the package listings:

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
    && sudo zypper ar https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.repo

Note: To get access to experimental features and release candidates, you may want to add the experimental branch to the repository listing:

$ zypper modifyrepo --enable libnvidia-container-experimental

Install the nvidia-container-toolkit package (and dependencies) after updating the package listing:

$ sudo zypper refresh
$ sudo zypper install -y nvidia-container-toolkit

Configure the Docker daemon to recognize the NVIDIA Container Runtime:

$ sudo nvidia-ctk runtime configure --runtime=docker

Restart the Docker daemon to complete the installation after setting the default runtime:

$ sudo systemctl restart docker

At this point, a working setup can be tested by running a base CUDA container:

$ sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

This should result in console output like the following:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
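If the repository setup in this section fails because $distribution does not resolve to a supported identifier, it can be set explicitly, as suggested in the note at the start of the subsection. A sketch using the opensuse-leap15.1 identifier mentioned there:

$ distribution=opensuse-leap15.1 \
    && sudo zypper ar https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.repo \
    && sudo zypper refresh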
Installing on Amazon Linux

The following steps can be used to set up the NVIDIA Container Toolkit on Amazon Linux 1 and Amazon Linux 2.

Setting up Docker on Amazon Linux

Amazon Linux is available on Amazon EC2 instances. For full install instructions, see Docker basics for Amazon ECS. After launching the official Amazon Linux EC2 image, update the installed packages and install the most recent Docker CE packages:

$ sudo yum update -y

Install the docker package:

$ sudo amazon-linux-extras install docker

Ensure the Docker service is running with the following command:

$ sudo systemctl --now enable docker

And finally, test your Docker installation by running the hello-world container:

$ sudo docker run --rm hello-world

This should result in console output like the following:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
Digest: sha256:7f0a9f93b4aa3022c3a4c147a449bf11e0941a1fd0bf4a8e6c9408b2600777c5
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
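Optionally, and not required by the steps above, the default ec2-user account can be added to the docker group so that Docker commands can be run without sudo. This is a common convenience step on Amazon Linux; a new login session is required for the group change to take effect:

$ sudo usermod -aG docker ec2-user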
Setting up NVIDIA Container Toolkit

Setup the repository and the GPG key:

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
      && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo

Note: To get access to experimental features and release candidates, you may want to add the experimental branch to the repository listing:

$ yum-config-manager --enable libnvidia-container-experimental

Install the nvidia-container-toolkit package (and dependencies) after updating the package listing:

$ sudo yum clean expire-cache
$ sudo yum install -y nvidia-container-toolkit

Configure the Docker daemon to recognize the NVIDIA Container Runtime:

$ sudo nvidia-ctk runtime configure --runtime=docker

Restart the Docker daemon to complete the installation after setting the default runtime:

$ sudo systemctl restart docker

At this point, a working setup can be tested by running a base CUDA container:

$ sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

This should result in console output like the following:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

containerd

Getting Started

For installing containerd, follow the official containerd documentation for your supported Linux distribution. For convenience, the documentation below includes instructions on installing containerd for various Linux distributions supported by NVIDIA.

Step 0: Pre-Requisites

To install containerd as the container engine on the system, install some pre-requisite modules:

$ sudo modprobe overlay \
    && sudo modprobe br_netfilter

You can also ensure these are persistent:

$ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Now, install the containerd package:

$ sudo apt-get update \
    && sudo apt-get install -y containerd.io

Configure containerd with a default config.toml configuration file:

$ sudo mkdir -p /etc/containerd \
    && sudo containerd config default | sudo tee /etc/containerd/config.toml

To make use of the NVIDIA Container Runtime, additional configuration is required. The following options should be added to configure nvidia as a runtime and use systemd as the cgroup driver. A patch is provided below:
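The sketch below shows the kind of runtime options such a patch adds to /etc/containerd/config.toml, assuming a version 2 containerd configuration; verify the exact snippet against the official NVIDIA Container Toolkit documentation for your toolkit version before applying it:

version = 2
[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "nvidia"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
    privileged_without_host_devices = false
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
      BinaryName = "/usr/bin/nvidia-container-runtime"
      SystemdCgroup = true

After editing the configuration, restart containerd:

$ sudo systemctl restart containerd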